1-Consciousness-Sense-Vision-Pattern Recognition-Representation

explicit representation

Neuron assemblies can hold essential knowledge about patterns {explicit representation}, stating information that implicit representation does not state directly. Mind calculates explicit representation from implicit representation, using feature extraction or neural networks [Kobatake et al., 1998] [Logothetis and Pauls, 1995] [Logothetis et al., 1994] [Sheinberg and Logothetis, 2001].
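
As a hedged illustration, the sketch below computes a small explicit feature vector from an implicit bitmap representation by simple feature extraction; the particular features (mean intensity, intensity variance, and finite-difference edge strengths) are illustrative assumptions, not features named above.

```python
import numpy as np

def explicit_from_implicit(bitmap: np.ndarray) -> np.ndarray:
    """Compute an explicit feature vector from an implicit bitmap.

    The bitmap is a 2-D array of sampled intensities (the implicit
    representation); the returned vector states pattern properties
    explicitly (illustrative features only).
    """
    # Global intensity statistics.
    mean_intensity = bitmap.mean()
    intensity_variance = bitmap.var()

    # Simple finite-difference edge strengths along each axis.
    horizontal_edges = np.abs(np.diff(bitmap, axis=1)).mean()
    vertical_edges = np.abs(np.diff(bitmap, axis=0)).mean()

    return np.array([mean_intensity, intensity_variance,
                     horizontal_edges, vertical_edges])

# Example: a bitmap with a bright vertical stripe.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0
print(explicit_from_implicit(image))
```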

implicit representation

Neuron or pixel sets can hold object image {implicit representation}, with no higher-level knowledge. Implicit representation samples intensities at positions at times, like bitmaps [Kobatake et al., 1998] [Logothetis and Pauls, 1995] [Logothetis et al., 1994] [Sheinberg and Logothetis, 2001].
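
A minimal sketch of an implicit representation, assuming intensities sampled on a pixel grid over several time steps; the array sizes and the moving-spot example are illustrative.

```python
import numpy as np

# Implicit representation: intensity samples indexed by (time, row, column),
# like a sequence of bitmaps. No pattern knowledge is stated explicitly;
# the values are just sampled intensities.
frames = 4          # number of time samples (illustrative sizes)
rows, cols = 8, 8   # spatial sampling grid
implicit = np.zeros((frames, rows, cols))

# A bright spot that moves one column per time step.
for t in range(frames):
    implicit[t, 4, 2 + t] = 1.0

# Reading the representation only yields intensities at positions at times.
print(implicit[2, 4, 4])   # intensity at time 2, row 4, column 4 -> 1.0
```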

generalized cone

Algorithms {generalized cone} can describe three-dimensional objects as conical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cones can be solid, hollow, inverted, asymmetric, or symmetric. Cone surfaces have patterns and textures [Marr, 1982]. Cone descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.
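
The sketch below is one possible encoding of a generalized cone, assuming a straight axis with given length and orientation and a radius that varies linearly along the axis; the class and parameter names are illustrative, not taken from Marr's formulation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GeneralizedCone:
    """A cone described by axis length/orientation and end radii.

    The radius varies linearly from base_radius to tip_radius along the
    axis (illustrative assumption); surface points are generated by
    sweeping a circle along the axis.
    """
    axis_length: float
    axis_direction: np.ndarray   # unit vector giving axis orientation
    base_radius: float
    tip_radius: float

    def surface_points(self, n_axial=10, n_angular=16) -> np.ndarray:
        # Build two unit vectors perpendicular to the axis.
        d = self.axis_direction / np.linalg.norm(self.axis_direction)
        helper = np.array([1.0, 0.0, 0.0])
        if abs(d[0]) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        u = np.cross(d, helper)
        u /= np.linalg.norm(u)
        v = np.cross(d, u)

        points = []
        for i in range(n_axial):
            s = i / (n_axial - 1)          # position along the axis, 0..1
            radius = (1 - s) * self.base_radius + s * self.tip_radius
            center = s * self.axis_length * d
            for k in range(n_angular):
                angle = 2 * np.pi * k / n_angular
                points.append(center + radius * (np.cos(angle) * u +
                                                 np.sin(angle) * v))
        return np.array(points)

cone = GeneralizedCone(2.0, np.array([0.0, 0.0, 1.0]), 1.0, 0.2)
print(cone.surface_points().shape)   # (160, 3)
```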

generalized cylinder

Algorithms {generalized cylinder} can describe three-dimensional objects as cylindrical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cylinders can be solid, hollow, inverted, asymmetric, or symmetric. Cylindrical surfaces have patterns and textures. Cylinder descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.
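
Both cone and cylinder descriptions above mention volumes, centroids, inertia moments, and inertia products; the sketch below estimates those quantities for a solid cylinder of unit density by uniform point sampling. The Monte Carlo approach and parameter names are illustrative assumptions, not the spherical-harmonic method mentioned above.

```python
import numpy as np

def cylinder_moments(radius, length, n_samples=200_000, seed=0):
    """Estimate volume, centroid, and inertia moments/products of a solid
    cylinder (unit density, axis along z) by uniform Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    # Sample points uniformly in the bounding box, keep those inside.
    pts = rng.uniform([-radius, -radius, 0.0],
                      [radius, radius, length],
                      size=(n_samples, 3))
    inside = pts[pts[:, 0]**2 + pts[:, 1]**2 <= radius**2]

    box_volume = (2 * radius) * (2 * radius) * length
    volume = box_volume * len(inside) / n_samples
    centroid = inside.mean(axis=0)

    # Second moments about the centroid, scaled by mass (= volume at unit density).
    x, y, z = (inside - centroid).T
    moments = {
        "Ixx": np.mean(y**2 + z**2) * volume,   # inertia moments
        "Iyy": np.mean(x**2 + z**2) * volume,
        "Izz": np.mean(x**2 + y**2) * volume,
        "Ixy": -np.mean(x * y) * volume,        # inertia products
        "Ixz": -np.mean(x * z) * volume,
        "Iyz": -np.mean(y * z) * volume,
    }
    return volume, centroid, moments

volume, centroid, moments = cylinder_moments(radius=1.0, length=2.0)
print(volume)          # ~ pi * r^2 * L = 6.28
print(centroid)        # ~ (0, 0, 1)
print(moments["Izz"])  # ~ volume * r^2 / 2 = 3.14
```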

structural description

Representations can describe object parts and spatial relations {structural description}. Structure units can be three-dimensional generalized cylinders (Marr), three-dimensional geons (Biederman), or three-dimensional curved solids {superquadrics} (Pentland). Structural descriptions are only good for simple recognition {entry-level recognition}, not for superstructures or substructures. Vision uses viewpoint-dependent recognition, not structural descriptions.
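
A minimal sketch of a structural description, storing object parts and their spatial relations; the example parts, attributes, and relation labels are illustrative assumptions, not taken from Marr, Biederman, or Pentland.

```python
# Structural description: object parts plus spatial relations between them.
# Parts here are labeled with simple volumetric unit types (illustrative).
parts = {
    "torso": {"unit": "generalized cylinder", "size": "large"},
    "head":  {"unit": "generalized cylinder", "size": "small"},
    "arm":   {"unit": "generalized cylinder", "size": "long"},
}

# Spatial relations as (part, relation, part) triples.
relations = [
    ("head", "above", "torso"),
    ("arm", "attached-to-side-of", "torso"),
]

def describe(parts, relations):
    """Print the structural description as parts and their relations."""
    for name, attrs in parts.items():
        print(f"{name}: {attrs['size']} {attrs['unit']}")
    for a, rel, b in relations:
        print(f"{a} {rel} {b}")

describe(parts, relations)
```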

template

Shape representations {template} can hold information for mechanisms to use to replicate or recognize {template theory} {naive template theory}. Template is like memory, and mechanism is like recall. Template can be coded units, shape, image, model, prototype, or pattern. Artificial templates include clay or wax molds. Natural templates include DNA and RNA. Templates can be abstract-space vectors. Using templates requires a template for every viewpoint, and so requires many templates.
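
To illustrate why template theory needs a template for every viewpoint, the sketch below stores one template per 90-degree rotation of a simple bar pattern and recognizes an input by choosing the stored template with the smallest pixel-wise difference; the patterns and matching rule are illustrative assumptions.

```python
import numpy as np

def make_bar(angle_steps: int) -> np.ndarray:
    """Make a 5x5 bitmap of a bar, rotated in 90-degree steps (illustrative)."""
    bar = np.zeros((5, 5))
    bar[2, :] = 1.0                      # horizontal bar
    return np.rot90(bar, k=angle_steps)  # rotate by angle_steps * 90 degrees

# One template per viewpoint: 0, 90, 180, 270 degrees.
templates = {f"{k * 90} deg": make_bar(k) for k in range(4)}

def recognize(image: np.ndarray) -> str:
    """Return the viewpoint label of the best-matching stored template."""
    return min(templates,
               key=lambda label: np.abs(templates[label] - image).sum())

# A vertical bar matches the 90-degree (or 270-degree) template.
test = np.zeros((5, 5))
test[:, 2] = 1.0
print(recognize(test))
```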

vector coding

Representations {vector coding} can be sense-receptor intensity patterns and/or brain-structure neuron outputs, which make feature vectors. Vector coding can identify rigid objects in Euclidean space. Vision uses non-metric projective geometry to find invariances by vector analysis [Staudt, 1847] [Veblen and Young, 1918]. Motor-representation middle and lower levels use code that indicates direction and amount.
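
The cross-ratio is a standard non-metric projective invariant; as a hedged illustration of finding invariances under projective geometry, the sketch below computes the cross-ratio of four collinear points and checks that a projective map of the line leaves it unchanged. The specific points and map are illustrative assumptions.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points, given by their
    parameters along the line; invariant under projective transformations."""
    return ((c - a) * (d - b)) / ((d - a) * (c - b))

def projective_map(t):
    """An illustrative projective (Mobius) map of the line: t -> (2t+1)/(t+3)."""
    return (2.0 * t + 1.0) / (t + 3.0)

points = [0.0, 1.0, 2.0, 5.0]                  # four collinear points
mapped = [projective_map(t) for t in points]

print(cross_ratio(*points))                    # 1.6
print(cross_ratio(*mapped))                    # 1.6 (up to rounding)
```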

Related Topics in Table of Contents

1-Consciousness-Sense-Vision-Pattern Recognition
